Assignment 4¶

In [ ]:
import networkx as nx
import pandas as pd
import numpy as np
import pickle

Part 1 - Random Graph Identification¶

For the first part of this assignment you will analyze randomly generated graphs and determine which algorithm created them.

In [ ]:
G1 = nx.read_gpickle("assets/A4_P1_G1")
G2 = nx.read_gpickle("assets/A4_P1_G2")
G3 = nx.read_gpickle("assets/A4_P1_G3")
G4 = nx.read_gpickle("assets/A4_P1_G4")
G5 = nx.read_gpickle("assets/A4_P1_G5")
P1_Graphs = [G1, G2, G3, G4, G5]

`P1_Graphs` is a list containing 5 networkx graphs. Each of these graphs were generated by one of three possible algorithms: * Preferential Attachment (`'PA'`) * Small World with low probability of rewiring (`'SW_L'`) * Small World with high probability of rewiring (`'SW_H'`)

Anaylze each of the 5 graphs using any methodology and determine which of the three algorithms generated each graph.

The graph_identification function should return a list of length 5 where each element in the list is either 'PA', 'SW_L', or 'SW_H'.

In [ ]:
def graph_identification():
    methods = []
    v_clust = []
    v_short = []
    v_avg_degree = []
    v_global = []
    for G in P1_Graphs:
        clustering = nx.average_clustering(G)
        shortest_av_path = nx.average_shortest_path_length(G)
        avg_degree = sum(dict(G.degree()).values()) / len(G)
        global_clustering = nx.transitivity(G)
        
        v_clust.append(clustering)
        v_short.append(shortest_av_path)
        v_avg_degree.append(avg_degree)
        v_global.append(global_clustering)
        methods = ['PA', 'SW_L', 'SW_L', 'PA', 'SW_H']
    return methods
In [ ]:
ans_one = graph_identification()
assert type(ans_one) == list, "You must return a list"

Part 2 - Company Emails¶

For the second part of this assignment you will be working with a company's email network where each node corresponds to a person at the company, and each edge indicates that at least one email has been sent between two people.

The network also contains the node attributes Department and ManagmentSalary.

Department indicates the department in the company which the person belongs to, and ManagmentSalary indicates whether that person is receiving a managment position salary.

In [ ]:
G = pickle.load(open('assets/email_prediction_NEW.txt', 'rb'))

print(f"Graph with {len(nx.nodes(G))} nodes and {len(nx.edges(G))} edges")
Graph with 1005 nodes and 16706 edges

Part 2A - Salary Prediction¶

Using network G, identify the people in the network with missing values for the node attribute ManagementSalary and predict whether or not these individuals are receiving a managment position salary.

To accomplish this, you will need to create a matrix of node features of your choice using networkx, train a sklearn classifier on nodes that have ManagementSalary data, and predict a probability of the node receiving a managment salary for nodes where ManagementSalary is missing.

Your predictions will need to be given as the probability that the corresponding employee is receiving a managment position salary.

The evaluation metric for this assignment is the Area Under the ROC Curve (AUC).

Your grade will be based on the AUC score computed for your classifier. A model which with an AUC of 0.75 or higher will recieve full points.

Using your trained classifier, return a Pandas series of length 252 with the data being the probability of receiving managment salary, and the index being the node id.

Example:

    1       1.0
    2       0.0
    5       0.8
    8       1.0
        ...
    996     0.7
    1000    0.5
    1001    0.0
    Length: 252, dtype: float64
In [ ]:
list(G.nodes(data=True))[:5] # print the first 5 nodes
Out[ ]:
[(0, {'Department': 1, 'ManagementSalary': 0.0}),
 (1, {'Department': 1, 'ManagementSalary': nan}),
 (581, {'Department': 3, 'ManagementSalary': 0.0}),
 (6, {'Department': 25, 'ManagementSalary': 1.0}),
 (65, {'Department': 4, 'ManagementSalary': nan})]
In [ ]:
def salary_predictions():
    import networkx as nx
    import pandas as pd
    import numpy as np
    import pickle
    from sklearn import linear_model
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import GridSearchCV
    df = pd.DataFrame(index=G.nodes())

    # df['Department'] = pd.Series(nx.get_node_attributes(G, 'Department'))
    df['ManagementSalary'] = pd.Series(nx.get_node_attributes(G, 'ManagementSalary'))

    ### get node based features
    df['clustering'] = pd.Series(nx.clustering(G))
    df['degree'] = pd.Series(G.degree())
    df['close_Cent'] = pd.Series(nx.closeness_centrality(G))
    df['Btwn_Cent'] = pd.Series(nx.betweenness_centrality(G, normalized=True,endpoints=False))


    df_test = df[pd.isna(df['ManagementSalary'])]
    df_train = df[df['ManagementSalary'].notna()]

    X_train = df_train.drop(['ManagementSalary'], axis = 1)
    y_train = df_train['ManagementSalary']

    X_test = df_test.drop(['ManagementSalary'], axis = 1)
    y_test = df_test['ManagementSalary']

    # Handle tuples (if needed) and ensure numerical data
    # Example: Let's say we want to replace tuples with the mean of their elements
    for column in X_train.columns:
        if X_train[column].apply(lambda x: isinstance(x, tuple)).any():
            X_train[column] = X_train[column].apply(lambda x: sum(x) / len(x))
            X_test[column] = X_test[column].apply(lambda x: sum(x) / len(x))

    LG = linear_model.LogisticRegression()
    grid = {'C': np.power(10.0, np.arange(-10, 10, 2))}
    clf = GridSearchCV(LG, grid, scoring='roc_auc', cv=5)

    clf.fit(X_train.values, y_train.values)

    print("best_train_score:{}".format(clf.best_score_))

    print("best_estimator:{}".format(clf.best_estimator_))

    return pd.Series(clf.best_estimator_.predict_proba(X_test.values)[:,1], index=df_test.index)
In [ ]:
ans_salary_preds = salary_predictions()
assert type(ans_salary_preds) == pd.core.series.Series, "You must return a Pandas series"
assert len(ans_salary_preds) == 252, "The series must be of length 252"
best_train_score:0.9348708902329237
best_estimator:LogisticRegression(C=100.0)
c:\Users\Don\AppData\Local\Programs\Python\Python311\Lib\site-packages\sklearn\linear_model\_logistic.py:458: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.

Increase the number of iterations (max_iter) or scale the data as shown in:
    https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
    https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
  n_iter_i = _check_optimize_result(

Part 2B - New Connections Prediction¶

For the last part of this assignment, you will predict future connections between employees of the network. The future connections information has been loaded into the variable future_connections. The index is a tuple indicating a pair of nodes that currently do not have a connection, and the Future Connection column indicates if an edge between those two nodes will exist in the future, where a value of 1.0 indicates a future connection.

In [ ]:
future_connections = pd.read_csv('assets/Future_Connections.csv', index_col=0, converters={0: eval})
future_connections.head(10)
Out[ ]:
Future Connection
(6, 840) 0.0
(4, 197) 0.0
(620, 979) 0.0
(519, 872) 0.0
(382, 423) 0.0
(97, 226) 1.0
(349, 905) 0.0
(429, 860) 0.0
(309, 989) 0.0
(468, 880) 0.0

Using network G and future_connections, identify the edges in future_connections with missing values and predict whether or not these edges will have a future connection.

To accomplish this, you will need to:

  1. Create a matrix of features of your choice for the edges found in future_connections using Networkx
  2. Train a sklearn classifier on those edges in future_connections that have Future Connection data
  3. Predict a probability of the edge being a future connection for those edges in future_connections where Future Connection is missing.

Your predictions will need to be given as the probability of the corresponding edge being a future connection.

The evaluation metric for this assignment is the Area Under the ROC Curve (AUC).

Your grade will be based on the AUC score computed for your classifier. A model which with an AUC of 0.75 or higher will recieve full points.

Using your trained classifier, return a series of length 122112 with the data being the probability of the edge being a future connection, and the index being the edge as represented by a tuple of nodes.

Example:

    (107, 348)    0.35
    (542, 751)    0.40
    (20, 426)     0.55
    (50, 989)     0.35
              ...
    (939, 940)    0.15
    (555, 905)    0.35
    (75, 101)     0.65
    Length: 122112, dtype: float64
In [ ]:
def new_connections_predictions():
    G = pickle.load(open('assets/email_prediction_NEW.txt', 'rb'))
    future_connections = pd.read_csv('assets/Future_Connections.csv', index_col=0, converters={0: eval})

    pa_preds = nx.preferential_attachment(G,future_connections.index)
    jc_preds = nx.jaccard_coefficient(G,future_connections.index)

    pa_my_l = [] # preferntial attachment
    pa_my_val = []

    jc_my_l = [] # jacard attachment
    jc_my_val = []

    my_dfs = [pa_preds, jc_preds]
    count = 0
    for my_df in my_dfs:
        if count == 0:
            for u,v,p in my_df:
                pa_my_l.append((u,v))
                pa_my_val.append(p)
        if count > 0:
            for u,v,p in my_df:
                jc_my_l.append((u,v))
                jc_my_val.append(p)
        count += 1

    # write a code to check if the two nodes are in the same or a different department, if so add either a one or a zero

    pa_preds = nx.preferential_attachment(G,future_connections.index)

    same_dep = []
    for u,v,p in pa_preds:
        if list(G.nodes[u].values())[0] == list(G.nodes[v].values())[0]:
            same_dep.append(1)
            va1 = list(G.nodes[u].values())[0]
            va2 = list(G.nodes[v].values())[0]
            #print(str(va1)+ "and" + str(va2))
        else:
            same_dep.append(0)

    # get the transformed data set 
    df_pa = pd.DataFrame({"pa_score":pa_my_val, "same_depart": same_dep}, index= pa_my_l)
    df_jc = pd.DataFrame({"jc_score":jc_my_val}, index= jc_my_l)            


    df_score = pd.merge(df_jc, df_pa, left_index=True, right_index=True)
    df_fin = pd.merge(future_connections, df_score, left_index=True, right_index=True)

    # machine learning

    # train test set
    test_X = df_fin[df_fin["Future Connection"].isnull()].iloc[:,[1,2,3]]
    train_X = df_fin[df_fin["Future Connection"].notnull()].iloc[:,[0,1,2,3]]
    train_X["Future Connection"] = train_X["Future Connection"].astype(int)


    # going with a logistic regression
    from sklearn.linear_model import LogisticRegression
    reg = LogisticRegression().fit(train_X.iloc[:,[1,2,3]], train_X.iloc[:,0])
    res = reg.predict(test_X.iloc[:,[0,1,2]])


    fin = reg.predict_proba(test_X.iloc[:,[0,1,2]])
    te = [(x[1]) for x in fin]
    res = pd.Series(te, index= test_X.index.values)
    return res
In [ ]:
ans_prob_preds = new_connections_predictions()
assert type(ans_prob_preds) == pd.core.series.Series, "You must return a Pandas series"
assert len(ans_prob_preds) == 122112, "The series must be of length 122112"